Boosted Mixture of Experts: An Ensemble Learning Scheme
Authors
Abstract
We present a new supervised learning procedure for ensemble machines, in which the outputs of predictors trained on different distributions are combined by a dynamic classifier combination model. This procedure may be viewed as either a version of mixture of experts (Jacobs, Jordan, Nowlan, & Hinton, 1991), applied to classification, or a variant of the boosting algorithm (Schapire, 1990). As a ...
Similar resources
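The dynamic classifier combination described above can be illustrated with a minimal sketch (not the authors' exact algorithm): a softmax gating function, conditioned on the input, weights the class scores of several pre-trained experts. The function names and shapes here are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def moe_predict(x, expert_weights, gate_weights):
    """Combine expert class scores with input-dependent gate probabilities.

    x: (n, d) inputs; expert_weights: list of k (d, c) matrices, one per
    expert; gate_weights: (d, k) matrix producing one gate logit per expert.
    Returns (n, c) combined class scores.
    """
    gates = softmax(x @ gate_weights)                           # (n, k)
    scores = np.stack([x @ w for w in expert_weights], axis=1)  # (n, k, c)
    return (gates[:, :, None] * scores).sum(axis=1)             # (n, c)
```

With zero gate weights the gate is uniform, so the output is the plain average of the expert scores; training the gate makes the weighting input-dependent.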
Ensemble Learning with Local Experts
Ensemble learning methods have received considerable attention in the past few years. Various methods for combining several learning experts have been developed and used in different domains of machine learning. Many works have focused on decision fusion of different experts. Some methods try to train all the experts on the same training data and then use statistical techniques to combine the r...
Boosted mixture learning of Gaussian mixture HMMs for speech recognition
In this paper, we propose a novel boosted mixture learning (BML) framework for Gaussian mixture HMMs in speech recognition. BML is an incremental method to learn mixture models for classification problems. In each step of BML, one new mixture component is calculated according to the functional gradient of an objective function to ensure that it is added along the direction that maximizes the objective ...
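The incremental idea in the abstract above — grow a mixture one component at a time, placing each new component where the current model is weakest — can be sketched crudely in one dimension. This is an assumed simplification, not the BML functional-gradient step itself: here the new mean is simply placed at the worst-explained sample.

```python
import numpy as np

def gauss_pdf(x, mu, var):
    # Density of a univariate Gaussian N(mu, var) at the points x.
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def incremental_mixture(x, n_components, var=1.0):
    """Greedily grow a 1-D Gaussian mixture one component at a time.

    Each new mean is placed at the sample the current mixture explains
    worst (lowest density) -- a crude stand-in for choosing the component
    that most increases the objective. Weights are kept uniform.
    """
    means, weights = [x.mean()], [1.0]
    for _ in range(n_components - 1):
        dens = sum(w * gauss_pdf(x, m, var) for w, m in zip(weights, means))
        means.append(x[np.argmin(dens)])            # worst-explained sample
        weights = [1.0 / len(means)] * len(means)   # re-normalize uniformly
    return np.array(means), np.array(weights)
```

A full implementation would instead optimize each new component (and the mixture weights) against the objective, e.g. by EM-style updates after every addition.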
Combining Classifiers and Learning Mixture-of-Experts
Expert combination is a classic strategy that has been widely used in various problem-solving tasks. A team of individuals with diverse and complementary skills tackles a task jointly, such that performance better than any single individual could achieve is obtained by integrating the strengths of the individuals. Starting from the late 1980s in the handwritten character recognition literature, studies ...
Domain Attention with an Ensemble of Experts
An important problem in domain adaptation is to quickly generalize to a new domain with limited supervision given K existing domains. One approach is to retrain a global model across all K + 1 domains using standard techniques, for instance Daumé III (2009). However, it is desirable to adapt without having to re-estimate a global model from scratch each time a new domain with potentially new inte...
Journal
Journal title: Neural Computation
Year: 1999
ISSN: 0899-7667,1530-888X
DOI: 10.1162/089976699300016737